Tensor robust principal component analysis (TRPCA), which aims to recover a low-rank tensor corrupted by sparse noise, has attracted much attention in many real-world applications. This paper develops a new global weighted TRPCA method (GWTRPCA), which is the first approach that simultaneously considers the significance of intra-frontal-slice and inter-frontal-slice singular values. Exploiting this global information, GWTRPCA assigns smaller weights to the larger singular values so that they are penalized less. Hence, our method can recover the low-tubal-rank component more accurately. Moreover, since the weight setting plays a crucial role in the success of GWTRPCA, we propose an effective adaptive weight learning strategy via a modified Cauchy estimator (MCE). To implement the GWTRPCA method, we devise an optimization algorithm based on the alternating direction method of multipliers (ADMM). Experiments on real-world datasets verify the effectiveness of our proposed method.
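For concreteness, a plausible form of such a globally weighted TRPCA objective is sketched below in standard weighted tensor nuclear norm notation; the per-frontal-slice weights $w_{i,j}$ and the inverse-magnitude weighting rule are illustrative assumptions, not the exact GWTRPCA formulation.

```latex
\min_{\mathcal{L},\,\mathcal{E}}\;
  \sum_{i=1}^{n_3}\sum_{j} w_{i,j}\,\sigma_j\!\bigl(\bar{\mathcal{L}}^{(i)}\bigr)
  \;+\; \lambda\,\lVert\mathcal{E}\rVert_1
\quad \text{s.t.}\quad \mathcal{X}=\mathcal{L}+\mathcal{E},
\qquad
w_{i,j}\;\propto\;\frac{1}{\sigma_j\!\bigl(\bar{\mathcal{L}}^{(i)}\bigr)+\varepsilon},
```

where $\bar{\mathcal{L}}^{(i)}$ denotes the $i$-th frontal slice of $\mathcal{L}$ in the Fourier domain and $\sigma_j(\cdot)$ its $j$-th singular value. With all $w_{i,j}=1$ this reduces to the usual tensor nuclear norm; an ADMM scheme would alternate a weighted singular value thresholding step for $\mathcal{L}$ with a soft-thresholding step for $\mathcal{E}$.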
Video representation learning has been successful in video-text pre-training for zero-shot transfer, where each sentence is trained to be close to the paired video clips in a common feature space. For long videos, given a paragraph of description where the sentences describe different segments of the video, by matching all sentence-clip pairs, the paragraph and the full video are aligned implicitly. However, such a unit-level similarity measure may ignore the global temporal context over a long time span, which inevitably limits the generalization ability. In this paper, we propose a contrastive learning framework TempCLR to compare the full video and the paragraph explicitly. As the video/paragraph is formulated as a sequence of clips/sentences, under the constraint of their temporal order, we use dynamic time warping to compute the minimum cumulative cost over sentence-clip pairs as the sequence-level distance. To explore the temporal dynamics, we break the consistency of temporal order by shuffling the video clips or sentences according to the temporal granularity. In this way, we obtain the representations for clips/sentences, which perceive the temporal information and thus facilitate the sequence alignment. In addition to pre-training on the video and paragraph, our approach can also generalize to the matching between different video instances. We evaluate our approach on video retrieval, action step localization, and few-shot action recognition, and achieve consistent performance gains on all three tasks. Detailed ablation studies are provided to justify the approach design.
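A minimal sketch of the sequence-level distance described above, assuming cosine distance between L2-normalized sentence and clip embeddings as the unit cost; the function name and cost definition are illustrative rather than TempCLR's actual implementation.

```python
import numpy as np

def dtw_sequence_distance(sent_emb: np.ndarray, clip_emb: np.ndarray) -> float:
    """Minimum cumulative sentence-clip cost under the temporal-order constraint.

    sent_emb: (n, d) L2-normalized sentence embeddings
    clip_emb: (m, d) L2-normalized clip embeddings
    """
    cost = 1.0 - sent_emb @ clip_emb.T           # (n, m) cosine distances
    n, m = cost.shape
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i, j] = cost[i - 1, j - 1] + min(
                acc[i - 1, j],      # next sentence stays on the same clip
                acc[i, j - 1],      # next clip stays on the same sentence
                acc[i - 1, j - 1],  # advance both
            )
    return float(acc[n, m])
```

The resulting distance can then serve as the similarity measure in a standard contrastive loss, with order-shuffled clips or sentences providing the harder negatives.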
This work studies the multi-task functional linear regression models where both the covariates and the unknown regression coefficients (called slope functions) are curves. For slope function estimation, we employ penalized splines to balance bias, variance, and computational complexity. The power of multi-task learning is brought in by imposing additional structures over the slope functions. We propose a general model with double regularization over the spline coefficient matrix: i) a matrix manifold constraint, and ii) a composite penalty as a summation of quadratic terms. Many multi-task learning approaches can be treated as special cases of this proposed model, such as a reduced-rank model and a graph Laplacian regularized model. We show that the composite penalty induces a specific norm, which helps to quantify the manifold curvature and determine the corresponding proper subset in the manifold tangent space. The complexity of the tangent space subset is then bridged to the complexity of its geodesic neighborhood via generic chaining. A unified convergence upper bound is obtained and specifically applied to the reduced-rank model and the graph Laplacian regularized model. The phase transition behaviors of the estimators are examined as we vary the configurations of model parameters.
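One plausible way to write the doubly regularized spline-coefficient problem is sketched below; the notation ($\mathbf{B}$ for the spline coefficient matrix with columns $\mathbf{b}_t$, $\mathcal{M}$ for the matrix manifold, $Q_k$ for the quadratic penalty matrices) is illustrative and not necessarily the paper's.

```latex
\widehat{\mathbf{B}}
\;=\;
\arg\min_{\mathbf{B}\in\mathcal{M}}\;
\sum_{t=1}^{T}\frac{1}{n_t}\sum_{i=1}^{n_t}
  \Bigl(y_{t,i}-\int x_{t,i}(s)\,\boldsymbol{\phi}(s)^{\top}\mathbf{b}_t\,ds\Bigr)^{2}
\;+\;\sum_{k}\lambda_k\,\operatorname{vec}(\mathbf{B})^{\top} Q_k\,\operatorname{vec}(\mathbf{B}),
```

where the $t$-th slope function is expanded as $\beta_t(s)=\boldsymbol{\phi}(s)^{\top}\mathbf{b}_t$ over a spline basis $\boldsymbol{\phi}$, the constraint $\mathbf{B}\in\mathcal{M}$ covers structures such as a fixed-rank factorization in the reduced-rank model, and the quadratic terms cover, e.g., a roughness penalty within tasks or a graph Laplacian penalty coupling tasks.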
Few-shot object detection (FSOD), which aims to detect novel objects using very few training examples, has recently attracted great research interest in the community. Metric-learning based methods have been demonstrated to be effective for this task using a two-branch Siamese network, which computes the similarity between image regions and the few-shot examples for detection. However, in previous works the interaction between the two branches is restricted to the detection head, while the remaining hundreds of layers are left for separate feature extraction. Inspired by recent work on vision transformers, we propose a novel cross-transformer based model (FCT) for FSOD by incorporating the cross-transformer into both the feature backbone and the detection head. An asymmetric-batched cross-attention is proposed to aggregate the key information from the two branches with different batch sizes. Our model can improve the few-shot similarity learning between the two branches by introducing multi-level interactions. Comprehensive experiments on the PASCAL VOC and MSCOCO FSOD benchmarks demonstrate the effectiveness of our model.
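A minimal sketch of cross-attention between two branches with unequal batch sizes, assuming the support-branch tokens are flattened into one key/value sequence shared by every query image; the module name and this pooling choice are illustrative assumptions, not FCT's actual asymmetric-batched cross-attention.

```python
import torch
import torch.nn as nn

class AsymmetricCrossAttention(nn.Module):
    """Query-branch tokens attend to key/value tokens gathered from a support
    branch with a different batch size (hypothetical layer layout)."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # dim must be divisible by num_heads
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, query_tokens: torch.Tensor, support_tokens: torch.Tensor) -> torch.Tensor:
        # query_tokens:   (Bq, Nq, dim)  tokens from the query-image branch
        # support_tokens: (Bs, Ns, dim)  tokens from the few-shot support branch
        # Flatten the support batch into one token sequence and share it with
        # every query image, so branches with unequal batch sizes can interact.
        Bq = query_tokens.size(0)
        kv = support_tokens.reshape(1, -1, support_tokens.size(-1)).expand(Bq, -1, -1)
        out, _ = self.attn(query_tokens, kv, kv)
        return query_tokens + out  # residual connection
```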
Few-shot object detection (FSOD) aims to detect never-seen objects using only a few examples. Recent advances have been achieved by learning to match between query image regions and few-shot class examples, so that the learned model can generalize to few-shot novel classes. However, currently most meta-learning based methods perform pairwise matching between query image regions (usually proposals) and novel classes separately, and therefore fail to take the multiple relationships among them into account. In this paper, we propose a novel FSOD model using a heterogeneous graph convolutional network. Through efficient message passing among all the proposal and class nodes with three different types of edges, we can obtain context-aware proposal features and query-adaptive, multiclass-enhanced prototype representations for each class, which help promote the pairwise matching and improve the final FSOD accuracy. Extensive experimental results show that our proposed model, denoted QA-FewDet, outperforms the current state-of-the-art methods on the PASCAL VOC and MSCOCO FSOD benchmarks under different shots and evaluation metrics.
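A minimal sketch of one round of message passing over a graph whose nodes are proposals and class prototypes, with a separate learnable transform per edge type; the three edge types and all module names here are illustrative assumptions rather than the paper's exact graph construction.

```python
import torch
import torch.nn as nn

class HeteroGraphLayer(nn.Module):
    """One message-passing round with a distinct linear transform per edge type
    (e.g., class-to-proposal, proposal-to-proposal, class-to-class)."""

    def __init__(self, dim: int, num_edge_types: int = 3):
        super().__init__()
        self.edge_transforms = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_edge_types)]
        )
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, node_feats, adjs):
        # node_feats: (N, dim) features of all proposal and class nodes
        # adjs: one row-normalized (N, N) adjacency matrix per edge type
        messages = sum(adj @ transform(node_feats)
                       for adj, transform in zip(adjs, self.edge_transforms))
        return torch.relu(self.update(torch.cat([node_feats, messages], dim=-1)))
```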
Few-shot object detection (FSOD) aims to detect objects using only a few examples. How to adapt state-of-the-art object detectors to the few-shot domain remains challenging. Object proposal is a key ingredient in modern object detectors. However, the quality of proposals generated for few-shot classes by existing methods is far worse than that for many-shot classes, e.g., boxes for few-shot classes are missed due to misclassification or inaccurate spatial locations. To address the noisy proposal problem, we propose a novel meta-learning based FSOD model by jointly optimizing few-shot proposal generation and fine-grained few-shot proposal classification. To improve proposal generation for few-shot classes, we propose to learn a lightweight metric-learning based prototype matching network, instead of the conventional simple linear object/non-object classifier used, e.g., in the RPN. Our non-linear classifier with a feature fusion network improves the discriminative prototype matching and the proposal recall for few-shot classes. To improve fine-grained few-shot proposal classification, we propose a novel fine-grained feature alignment method to address the spatial misalignment between the noisy proposals and the few-shot classes, thus improving the performance of few-shot object detection. Meanwhile, we learn a separate R-CNN detection head for the many-shot base classes and show strong performance in maintaining base-class knowledge. Our model achieves state-of-the-art performance on multiple FSOD benchmarks over most of the shots and metrics.
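A minimal sketch of metric-learning based proposal scoring, assuming cosine similarity between anchor features and a class prototype averaged over the support features; the function name, the temperature, and the sigmoid readout are illustrative assumptions, not the paper's prototype matching network.

```python
import torch
import torch.nn.functional as F

def prototype_matching_objectness(anchor_feats: torch.Tensor,
                                  class_prototype: torch.Tensor,
                                  temperature: float = 0.1) -> torch.Tensor:
    """Score anchors by similarity to a few-shot class prototype instead of a
    class-agnostic linear objectness classifier.

    anchor_feats:    (N, d) features of anchor/proposal locations
    class_prototype: (d,)   prototype averaged over the few-shot support features
    returns:         (N,)   objectness probabilities w.r.t. the target class
    """
    sim = F.cosine_similarity(anchor_feats, class_prototype.unsqueeze(0), dim=-1)
    return torch.sigmoid(sim / temperature)
```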
We study the problem of few-shot open-set recognition (FSOR), which learns a recognition system capable of both quickly adapting to new classes with limited labeled examples and rejecting unknown negative samples. Traditional large-scale open-set methods are ineffective for the FSOR problem due to the data limitation. Current FSOR methods typically calibrate few-shot closed-set classifiers to be sensitive to negative samples so that they can be rejected via thresholding. However, threshold tuning is a challenging process, as different FSOR tasks may require different rejection capabilities. In this paper, we propose task-adaptive negative class envision for FSOR to integrate threshold tuning into the learning process. Specifically, we augment the few-shot closed-set classifier with additional negative prototypes generated from the few-shot examples. By incorporating few-shot class correlations in the negative generation process, we are able to learn dynamic rejection boundaries for FSOR tasks. In addition, we extend our method to generalized few-shot open-set recognition (GFSOR), which requires classification on both many-shot and few-shot classes as well as rejection of negative samples. Extensive experiments on public benchmarks validate our methods on both problems.
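A minimal sketch of classification with generated negative prototypes, assuming the negatives are produced by a small network conditioned on the pooled few-shot class prototypes and that a query is rejected when a negative prototype wins the similarity comparison; the generator and readout here are illustrative assumptions, not the paper's exact negative-class envision module.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NegativeEnvision(nn.Module):
    """Augment a prototype classifier with task-adaptive negative prototypes."""

    def __init__(self, dim: int, num_negatives: int = 1):
        super().__init__()
        self.num_negatives = num_negatives
        self.generator = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, num_negatives * dim)
        )

    def forward(self, query: torch.Tensor, class_protos: torch.Tensor) -> torch.Tensor:
        # query:        (B, dim)  query embeddings
        # class_protos: (C, dim)  prototypes of the few-shot (closed-set) classes
        # Condition the negatives on the whole task by pooling the class prototypes.
        task_feat = class_protos.mean(dim=0)
        neg_protos = self.generator(task_feat).view(self.num_negatives, -1)
        all_protos = torch.cat([class_protos, neg_protos], dim=0)  # (C + K, dim)
        logits = F.cosine_similarity(query.unsqueeze(1), all_protos.unsqueeze(0), dim=-1)
        return logits  # argmax landing in the last K slots => reject as unknown
```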
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes image and point cloud tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on the nuScenes benchmark. Moreover, CMT remains strongly robust even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Knowledge graphs (KG) have served as a key component of various natural language processing applications. Commonsense knowledge graphs (CKG) are a special type of KG, where entities and relations are composed of free-form text. However, previous works on KG completion and CKG completion suffer from long-tail relations and newly added relations which do not have many known triples for training. In light of this, few-shot KG completion (FKGC), which requires the strengths of graph representation learning and few-shot learning, has been proposed to address the problem of limited annotated data. In this paper, we comprehensively survey previous attempts on such tasks in the form of a series of methods and applications. Specifically, we first introduce FKGC challenges, commonly used KGs, and CKGs. Then we systematically categorize and summarize existing works in terms of the type of KGs and the methods. Finally, we present applications of FKGC models on prediction tasks in different areas and share our thoughts on future research directions of FKGC.
Few-Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are two-fold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. When benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few-Shot Object Detection. Code and model will be available.
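A minimal sketch of mask-based dynamic class centers used to re-weight query features, assuming masked average pooling over the support feature maps followed by a channel-wise gating of the query features; this is an illustrative reading of the idea, not the RefT implementation.

```python
import torch

def reweight_query_features(query_feats: torch.Tensor,
                            support_feats: torch.Tensor,
                            support_masks: torch.Tensor) -> torch.Tensor:
    """Re-weight query features with a class center pooled under the support masks.

    query_feats:   (B, C, H, W)  query feature maps
    support_feats: (S, C, H, W)  support feature maps
    support_masks: (S, 1, H, W)  binary instance masks for the support images
    """
    # Masked average pooling: class center restricted to the object regions.
    masked = support_feats * support_masks
    center = masked.sum(dim=(0, 2, 3)) / support_masks.sum().clamp(min=1e-6)  # (C,)
    # Channel-wise re-weighting of the query features by the class center.
    weights = torch.sigmoid(center).view(1, -1, 1, 1)
    return query_feats * weights
```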